feat: txgossip package
#26
base: main
Conversation
…l be much easier for everyone
Co-authored-by: Stephen Buttolph <[email protected]> Signed-off-by: Arran Schlosberg <[email protected]>
StephenButtolph left a comment
will continue reviewing tomorrow
txgossip/txgossip.go
Outdated
// to release resources.
func NewSet(logger logging.Logger, pool *txpool.TxPool, bloom *gossip.BloomFilter, targetBloomElements int) (*Set, func() error) {
	txs := make(chan core.NewTxsEvent)
	sub := pool.SubscribeTransactions(txs, false)
It seems like this only receives txs upon mempool reorgs. I would have thought that the bloom filter should update as soon as we add the tx to the pool to avoid unnecessary gossip
I've demonstrated in TestTxPoolPopulatesBloomFilter() that it will send new txs as well and that the filter receives them.
Sorry, my original comment wasn't questioning whether the txs would eventually get added to the bloom filter. It was questioning whether it is smart to wait until a mempool reorg happens to populate the bloom filter.
In all of our current mempool implementations, we include the transactions into the bloom filter prior to returning from Add.
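For illustration, a minimal sketch of that pattern. The wrapper type and its add delegate are hypothetical, not this PR's API; only the gossip.BloomFilter and the package's Transaction wrapper come from the diff above:

// mempool is a hypothetical wrapper illustrating the pattern: the bloom
// filter is updated before Add returns, not on a later pool event.
type mempool struct {
	add   func(*types.Transaction) error // delegate to the underlying pool
	bloom *gossip.BloomFilter
}

// Add inserts the tx into the pool and, before returning, into the bloom
// filter, so gossip peers see it immediately rather than after the next
// pool event.
func (m *mempool) Add(tx *types.Transaction) error {
	if err := m.add(tx); err != nil {
		return err
	}
	m.bloom.Add(Transaction{tx})
	return nil
}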
for _, tx := range ev.Txs {
	bloom.Add(Transaction{tx})
}
This only seems to add the newly issued transactions into the bloom filter, don't we need to reintroduce all the transactions into the bloom filter?
Yup; this is now done in NewSet() and tested in TestTxPoolPopulatesBloomFilter().
While I think it is good that we initialize the bloom filter in NewSet, I was referring to populating the bloom filter with existing transactions in the mempool after we reset the bloom filter.
ev.Txs doesn't include transactions that were previously in the mempool. So whenever we reset the bloom filter, we are going to start receiving (permanently, until those txs are removed from the mempool) duplicate tx gossip.
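A minimal sketch of the reset path being described. pendingTxs is a hypothetical stand-in for however the pool exposes its current contents; ResetBloomFilterIfNeeded and Transaction come from the diffs above:

// refreshBloom resets the filter once it saturates and, if it was reset,
// re-seeds it from everything still in the mempool so those txs are not
// treated as unseen and re-gossiped to us.
func refreshBloom(pool *txpool.TxPool, bloom *gossip.BloomFilter, target int) error {
	reset, err := gossip.ResetBloomFilterIfNeeded(bloom, target)
	if err != nil || !reset {
		return err
	}
	for _, tx := range pendingTxs(pool) { // hypothetical accessor
		bloom.Add(Transaction{tx})
	}
	return nil
}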
txgossip/txgossip.go
Outdated
for {
	select {
	case ev := <-txs:
		if _, err := gossip.ResetBloomFilterIfNeeded(bloom, targetBloomElements); err != nil {
It is probably more robust to use a multiple of the current mempool size as a lower bound here. Otherwise we could end up resetting every time we see a new tx.
Ahh, nice idea. I had no idea how to set this (hence making it an argument). I've arbitrarily gone with 2x pool size but could always reintroduce an argument to configure that.
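Roughly this shape, where poolSize is a hypothetical accessor for the current number of txs in the pool:

func resetIfSaturated(pool *txpool.TxPool, bloom *gossip.BloomFilter, targetBloomElements int) error {
	// Never let the reset threshold fall below a multiple of the live
	// pool size; otherwise a pool larger than targetBloomElements would
	// trigger a reset on every new tx.
	target := targetBloomElements
	if lower := 2 * poolSize(pool); lower > target { // poolSize: hypothetical
		target = lower
	}
	_, err := gossip.ResetBloomFilterIfNeeded(bloom, target)
	return err
}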
select {
case b := <-p.in:
	select {
	case p.out <- core.ChainHeadEvent{Block: b}:
	case <-p.quit:
		return
	}
case <-p.quit:
	return
}
I feel like this makes the "receive" followed by "send" logic a bit clearer:
Suggested change — replace:

	select {
	case b := <-p.in:
		select {
		case p.out <- core.ChainHeadEvent{Block: b}:
		case <-p.quit:
			return
		}
	case <-p.quit:
		return
	}

with:

	var b *types.Block
	select {
	case b = <-p.in:
	case <-p.quit:
		return
	}
	select {
	case p.out <- core.ChainHeadEvent{Block: b}:
	case <-p.quit:
		return
	}
I have a strong dislike of the "variable look-back" pattern as it causes a non-linearity in reading the code.
I'm being hyperbolic here, but this is the cognitive pattern I'm trying to avoid:
- Push b to my brain stack without knowing how it will be used (context aids memory).
- Set some b, making sure to notice that it's = and not :=, recalling my earlier stack push.
- Parse some unrelated "distraction" code, keeping b at the back of my mind.
- Use b.

vs

- Declare and set b, pushing it to my brain stack but with context as to where it came from.
- Use b.
Granted in this particular solution it's not all that significant because the "distraction" code is minimal, but I still feel like it's an anti-pattern. All that said, I'm open to having my mind changed if you think there are sufficient reasons to change.
fwiw, using := rather than = won't compile.
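Concretely (same pipe type as in the suggestion): the := inside the case would declare a new b scoped to that case, shadowing the outer variable, and since the case body never reads the inner b the compiler rejects it:

func (p *pipe) loopBroken() {
	var b *types.Block
	select {
	case b := <-p.in: // compile error: b declared and not used
	case <-p.quit:
		return
	}
	select {
	case p.out <- core.ChainHeadEvent{Block: b}: // outer b, would still be nil
	case <-p.quit:
		return
	}
}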
Would you be okay with some helper functions instead? Something like:
func (p *pipe) loop() {
	defer close(p.done)
	for {
		b, ok := p.read()
		if !ok {
			return
		}
		if !p.write(b) {
			return
		}
	}
}

func (p *pipe) read() (*types.Block, bool) {
	select {
	case b := <-p.in:
		return b, true
	case <-p.quit:
		return nil, false
	}
}

func (p *pipe) write(b *types.Block) bool {
	select {
	case p.out <- core.ChainHeadEvent{Block: b}:
		return true
	case <-p.quit:
		return false
	}
}

It's much more verbose... But it doesn't have your (perhaps limited to you 😉) meat stack issue.
func (bc *blockchain) StateAt(root common.Hash) (*state.StateDB, error) {
	return state.New(root, bc.exec.StateCache(), nil)
}
So, with this PR the mempool's validity checking is based on the settled state right?
Intent is currently undefined. Sorry, I should have been clearer in what I meant in the PR description (i.e. I'm kicking that can down the road entirely):
Any specific rules for when to reject a tx from the mempool will likely require some libevm mods so can follow later.
The p2p architecture and gossip was all so new to me that I found this PR relatively difficult, so wanted to focus on learning it and getting the wiring correct. I'll concentrate on things like validity when we have the VM to orchestrate all the moving parts.
Introduces an integration point between txpool.TxPool, saexec.Executor and gossip.Set, with a simple transaction-prioritisation mechanism. Any specific rules for when to reject a tx from the mempool will likely require some libevm mods so can follow later.

The consensus worst-case validation will still need to track "in-flight" txs as the LegacyPool will only remove them once included. An enqueueing subscription mechanism is added for this purpose.
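For readers new to the thread, a hypothetical sketch of what such an enqueueing subscription can look like — none of these names are the PR's actual API, and it assumes only the standard library's sync package plus the tx type used above:

// enqueueFeed fans each newly enqueued tx out to subscribers, letting
// consensus-side code track "in-flight" txs that the LegacyPool only
// drops once they are included in a block.
type enqueueFeed struct {
	mu   sync.Mutex
	subs []chan *types.Transaction
}

func (f *enqueueFeed) Subscribe() <-chan *types.Transaction {
	f.mu.Lock()
	defer f.mu.Unlock()
	ch := make(chan *types.Transaction, 64) // buffered: slow readers must not stall enqueue
	f.subs = append(f.subs, ch)
	return ch
}

func (f *enqueueFeed) notify(tx *types.Transaction) {
	f.mu.Lock()
	defer f.mu.Unlock()
	for _, ch := range f.subs {
		select {
		case ch <- tx:
		default: // drop rather than block the pool's hot path
		}
	}
}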